
    Tribes Is Hard in the Message Passing Model

    We consider the point-to-point message passing model of communication, in which there are $k$ processors with individual private inputs, each $n$ bits long. Each processor is located at a node of an underlying undirected graph and has access to private random coins. An edge of the graph is a private channel of communication between its endpoints. The processors have to compute a given function of all their inputs by communicating along these channels. While this model has been widely used in distributed computing, strong lower bounds on the amount of communication needed to compute simple functions have only recently begun to appear. In this work, we prove a tight lower bound of $\Omega(kn)$ on the communication needed to compute the Tribes function when the underlying graph is a star on $k+1$ nodes, with $k$ leaves holding the inputs and a center with no input. A lower bound for this topology easily implies comparable bounds for other topologies. Our lower bounds are obtained by building upon the recent information-theoretic techniques of Braverman et al. (FOCS'13) and combining them with the earlier work of Jayram, Kumar and Sivakumar (STOC'03). This approach yields information complexity bounds that are of independent interest.
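    The Tribes function referred to above is, in its standard form, an AND of ORs over disjoint blocks of input bits. The sketch below is a minimal illustration of that definition, assuming each processor's $n$-bit input forms one OR block and the AND is taken over the $k$ blocks; the paper's exact parameterization of Tribes may differ, and the helper name `tribes` is illustrative.

```python
# Minimal sketch of the Tribes function on k inputs of n bits each, assuming
# the standard definition: an AND of ORs over disjoint blocks.  The block
# layout used here (one OR block per processor) is an illustrative assumption.

def tribes(inputs):
    """inputs: list of k bit-strings, each of length n."""
    # Each processor's block contributes the OR of its own n bits ...
    block_values = [any(bit == "1" for bit in x) for x in inputs]
    # ... and the function value is the AND of the k block values.
    return all(block_values)

# Example with k = 3 processors and n = 4 bits each:
print(tribes(["0101", "1000", "0000"]))  # False: the third block is all zeros
print(tribes(["0101", "1000", "0010"]))  # True: every block contains a one
```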

    Towards Better Separation between Deterministic and Randomized Query Complexity

    We show that there exists a Boolean function $F$ which exhibits the following separations among deterministic query complexity $D(F)$, randomized zero-error query complexity $R_0(F)$, and randomized one-sided error query complexity $R_1(F)$: $R_1(F) = \widetilde{O}(\sqrt{D(F)})$ and $R_0(F) = \widetilde{O}(D(F))^{3/4}$. This refutes the conjecture made by Saks and Wigderson that for any Boolean function $f$, $R_0(f) = \Omega(D(f)^{0.753..})$. This also shows the widest known separation between $R_1(f)$ and $D(f)$ for any Boolean function. The function $F$ was defined by Göös, Pitassi and Watson, who studied it to show a separation between deterministic decision tree complexity and unambiguous non-deterministic decision tree complexity. Independently of us, Ambainis et al. proved that different variants of the function $F$ certify an optimal (quadratic) separation between $D(f)$ and $R_0(f)$, and a polynomial separation between $R_0(f)$ and $R_1(f)$. Viewed as separation results, our results are subsumed by those of Ambainis et al. However, while the functions considered in the work of Ambainis et al. are different variants of $F$, we work with the original function $F$ itself.
    Comment: Reference added.
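    For quick reference, the separations claimed above and the conjecture they refute can be placed side by side; the block below simply restates the bounds from the abstract in display form.

```latex
% Separations exhibited by F (as stated in the abstract):
\[
  R_1(F) = \widetilde{O}\bigl(\sqrt{D(F)}\bigr),
  \qquad
  R_0(F) = \widetilde{O}\bigl(D(F)\bigr)^{3/4}.
\]
% Refuted conjecture of Saks and Wigderson, for every Boolean function f:
\[
  R_0(f) = \Omega\bigl(D(f)^{0.753\ldots}\bigr).
\]
```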

    Weighted Min-Cut: Sequential, Cut-Query and Streaming Algorithms

    Consider the following 2-respecting min-cut problem. Given a weighted graph $G$ and its spanning tree $T$, find the minimum cut among the cuts that contain at most two edges of $T$. This problem is an important subroutine in Karger's celebrated randomized near-linear-time min-cut algorithm [STOC'96]. We present a new approach for this problem which can be easily implemented in many settings, leading to the following randomized min-cut algorithms for weighted graphs.
    * An $O(m\frac{\log^2 n}{\log\log n} + n\log^6 n)$-time sequential algorithm: This improves Karger's $O(m \log^3 n)$ and $O(m\frac{(\log^2 n)\log (n^2/m)}{\log\log n} + n\log^6 n)$ bounds when the input graph is not extremely sparse or dense. Improvements over Karger's bounds were previously known only under the rather strong assumption that the input graph is simple [Henzinger et al. SODA'17; Ghaffari et al. SODA'20]. For unweighted graphs with parallel edges, our bound can be improved to $O(m\frac{\log^{1.5} n}{\log\log n} + n\log^6 n)$.
    * An algorithm requiring $\tilde O(n)$ cut queries to compute the min-cut of a weighted graph: This answers an open problem of Rubinstein et al. [ITCS'18], who obtained a similar bound for simple graphs.
    * A streaming algorithm that requires $\tilde O(n)$ space and $O(\log n)$ passes to compute the min-cut: The only previous non-trivial exact min-cut algorithm in this setting is the 2-pass $\tilde O(n)$-space algorithm on simple graphs [Rubinstein et al., ITCS'18] (observed by Assadi et al. STOC'19).
    In contrast to Karger's 2-respecting min-cut algorithm, which deploys sophisticated dynamic programming techniques, our approach exploits some cute structural properties, so that it only needs to compute the values of $\tilde O(n)$ cuts corresponding to removing $\tilde O(n)$ pairs of tree edges, an operation that can be done quickly in many settings.
    Comment: Updates on this version: (1) Minor corrections in Sections 5.1 and 5.2; (2) References to newer results by GMW SOSA21 (arXiv:2008.02060v2), DEMN STOC21 (arXiv:2004.09129v2) and LMN 21 (arXiv:2102.06565v1).
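    As a point of reference for the 2-respecting subproblem defined above, the following is a naive brute-force sketch: it enumerates every choice of one or two tree edges, removes them from $T$, and evaluates the weight in $G$ of each cut whose crossing tree edges lie in the removed set. The graph representation and function names are illustrative assumptions; the sketch runs in roughly $O(n^2 m)$ time and is meant only to pin down the definition, not to reflect the paper's near-linear approach.

```python
# Naive brute-force reference for the 2-respecting min-cut subproblem: given a
# weighted graph G and a spanning tree T of G, find the minimum-weight cut that
# is crossed by at most two edges of T.
from itertools import combinations


def cut_weight(graph_edges, side):
    """Total weight of edges of G with exactly one endpoint in `side`."""
    return sum(w for u, v, w in graph_edges if (u in side) != (v in side))


def tree_components(nodes, tree_edges, removed):
    """Connected components of T after deleting the edge set `removed`."""
    adj = {v: [] for v in nodes}
    for u, v in tree_edges:
        if (u, v) not in removed and (v, u) not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x])
        seen |= comp
        comps.append(comp)
    return comps


def two_respecting_min_cut(nodes, graph_edges, tree_edges):
    """Minimum weight over all cuts crossed by at most two edges of T."""
    best = float("inf")
    choices = list(combinations(tree_edges, 1)) + list(combinations(tree_edges, 2))
    for removed in choices:
        # Removing the chosen tree edges splits T into 2 or 3 components; every
        # cut crossed only by those edges has a whole component as one of its sides.
        for side in tree_components(nodes, tree_edges, set(removed)):
            best = min(best, cut_weight(graph_edges, side))
    return best


# Toy example: a weighted 4-cycle with the path 0-1-2-3 as its spanning tree.
nodes = [0, 1, 2, 3]
graph_edges = [(0, 1, 3), (1, 2, 1), (2, 3, 3), (3, 0, 1)]
tree_edges = [(0, 1), (1, 2), (2, 3)]
print(two_respecting_min_cut(nodes, graph_edges, tree_edges))  # 2: cut {0,1} vs {2,3}
```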

    Low-Carbohydrate High-Fat (LCHF) Diet: Evidence of Its Benefits

    Current dietary recommendations state that there is insufficient evidence to prescribe an exact percentage of calories from carbohydrate, protein and fat for people with diabetes, given the variety of popular diets currently available. Over the years, much research has focused on the relative importance of the right proportion of carbohydrate and fat in a balanced diabetic diet. The jury is still out regarding the relative merits and demerits of a diabetic diet that is low-carbohydrate, high-fat versus one that is low-fat, high-carbohydrate. Evidence from various studies suggests that low-carbohydrate diets reduce cardiovascular disease (CVD) risk by lowering HbA1c levels and improving blood pressure and body weight. There is also a positive effect on the lipid profile and reversal of non-alcoholic fatty liver disease (NAFLD). Whilst there are some significant metabolic benefits of the LCHF diet, it is accepted that more long-term studies are needed before it can be used in daily clinical practice. This chapter focuses on the basic physiology and metabolism of carbohydrate and fat in normal and diabetic patients, and reviews the literature on these two diet combinations with current thoughts and evidence on this core issue affecting insulin utilization and metabolic profile.

    Lower Bounds for Elimination via Weak Regularity

    We consider the problem of elimination in communication complexity, which was first raised by Ambainis et al. and later studied by Beimel et al. for its connection to the famous direct sum question. In this problem, let f: {0,1}^{2n} -> {0,1} be any Boolean function. Alice and Bob get k inputs x_1, ..., x_k and y_1, ..., y_k respectively, with x_i, y_i in {0,1}^n. They want to output a k-bit vector v such that there exists an index i for which v_i is not equal to f(x_i, y_i). We prove a general result lower bounding the randomized communication complexity of the elimination problem for f in terms of its discrepancy. Consequently, we obtain strong lower bounds for the functions Inner-Product and Greater-Than that work for exponentially larger values of k than the best previous bounds. To prove our result, we use a pseudo-random notion called regularity that was first used by Raz and Wigderson. We show that functions with small discrepancy are regular. We also observe that a weaker notion, which we call weak regularity, already implies hardness of elimination. Finally, we give a different proof, borrowing ideas from Viola, to show that Greater-Than is weakly regular.
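    To make the task concrete, the snippet below is a minimal sketch of what counts as a correct output in the elimination problem just described: the k-bit vector v must disagree with f on at least one coordinate. The helper names and the toy choice of f (inner product mod 2) are illustrative and not taken from the paper.

```python
# Minimal sketch of a correct answer in the elimination problem: Alice holds
# x_1..x_k, Bob holds y_1..y_k, and together they must output a k-bit vector v
# with v_i != f(x_i, y_i) for at least one index i.

def is_valid_elimination_answer(f, xs, ys, v):
    """True iff the vector v disagrees with f on at least one coordinate."""
    return any(v[i] != f(xs[i], ys[i]) for i in range(len(v)))


def inner_product(x, y):
    """Inner product mod 2 of two bit-tuples (a toy choice of f)."""
    return sum(a & b for a, b in zip(x, y)) % 2


# k = 2 instances on n = 3 bits; f evaluates to 1 on both pairs, so the
# all-zero vector is a valid elimination answer, while (1, 1) is not.
xs = [(1, 0, 1), (0, 1, 1)]
ys = [(1, 1, 0), (0, 1, 0)]
print(is_valid_elimination_answer(inner_product, xs, ys, v=(0, 0)))  # True
print(is_valid_elimination_answer(inner_product, xs, ys, v=(1, 1)))  # False
```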

    Work-Optimal Parallel Minimum Cuts for Non-Sparse Graphs

    We present the first work-optimal polylogarithmic-depth parallel algorithm for the minimum cut problem on non-sparse graphs. For $m \geq n^{1+\epsilon}$ for any constant $\epsilon > 0$, our algorithm requires $O(m \log n)$ work and $O(\log^3 n)$ depth and succeeds with high probability. Its work matches the best $O(m \log n)$ runtime for sequential algorithms [MN STOC 2020; GMW SOSA 2021]. This improves the previous best work by Geissmann and Gianinazzi [SPAA 2018] by an $O(\log^3 n)$ factor, while matching the depth of their algorithm. To do this, we design a work-efficient approximation algorithm and parallelize the recent sequential algorithms [MN STOC 2020; GMW SOSA 2021] that exploit a connection between 2-respecting minimum cuts and 2-dimensional orthogonal range searching.
    Comment: Updates on this version: Minor corrections for the previous and our results.

    Nearly Optimal Communication and Query Complexity of Bipartite Matching

    We settle the complexities of the maximum-cardinality bipartite matching problem (BMM) up to poly-logarithmic factors in five models of computation: the two-party communication, AND query, OR query, XOR query, and quantum edge query models. Our results answer open problems that have been raised repeatedly since at least three decades ago [Hajnal, Maass, and Turan STOC'88; Ivanyos, Klauck, Lee, Santha, and de Wolf FSTTCS'12; Dobzinski, Nisan, and Oren STOC'14; Nisan SODA'21] and tighten the lower bounds shown by Beniamini and Nisan [STOC'21] and Zhang [ICALP'04]. We also settle the communication complexity of generalizations of BMM, such as maximum-cost bipartite $b$-matching and transshipment, as well as the query complexity of unique bipartite perfect matching (answering an open question by Beniamini [2022]). Our algorithms and lower bounds follow from simple applications of known techniques such as cutting-plane methods and set disjointness.
    Comment: Accepted in FOCS 2022.
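    For readers who want the underlying problem pinned down, the snippet below is a standard augmenting-path sketch of maximum-cardinality bipartite matching, i.e., the BMM problem whose communication and query complexity the work above settles. It is only a plain reference implementation under an assumed adjacency-list representation; it is not one of the communication- or query-efficient procedures from the paper.

```python
# Classical augmenting-path algorithm for maximum-cardinality bipartite
# matching (roughly O(V * E) time), included only to pin down the BMM problem.

def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # left vertex matched to each right vertex, or -1

    def try_augment(u, seen):
        # Try to match left vertex u, re-matching already-matched vertices if needed.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))


# Toy instance: left vertices 0..2, right vertices 0..2.
adj = {0: [0, 1], 1: [0], 2: [1, 2]}
print(max_bipartite_matching(adj, 3, 3))  # 3: a perfect matching exists
```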